Scalable Sparse Covariance Estimation via Self-Concordance

Authors

  • Anastasios Kyrillidis
  • Rabeeh Karimi Mahabadi
  • Quoc Tran-Dinh
  • Volkan Cevher
Abstract

We consider the class of convex minimization problems composed of a self-concordant function, such as the log-det metric, a convex data fidelity term h(·), and a regularizing, possibly non-smooth, function g(·). Problems of this type have recently attracted a great deal of interest, mainly due to their omnipresence in prominent applications. Under this locally Lipschitz continuous gradient setting, we analyze the convergence behavior of proximal Newton schemes, with the added twist that the evaluations involved may be inexact. We prove attractive convergence rate guarantees and enhance state-of-the-art optimization schemes to accommodate such inexactness. Experimental results on sparse covariance estimation show the merits of our algorithm, both in terms of recovery efficiency and complexity.

* This work is supported in part by the European Commission under Grant MIRG-268398, ERC Future Proof, and SNF grants 200021-132548, 200021-146750, and CRSII2-147633. Copyright © 2014, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction

Convex ℓ1-regularized log-det divergence criteria have been proven, both theoretically and empirically, to produce consistent models in diverse applications. The literature on the formulation and use of such criteria is expanding, with applications in Gaussian graphical learning (Dahl, Vandenberghe, and Roychowdhury 2008; Banerjee, El Ghaoui, and d'Aspremont 2008; Hsieh et al. 2011), sparse covariance estimation (Rothman 2012), Poisson-based imaging (Harmany, Marcia, and Willett 2012), etc.

In this paper, we focus on the sparse covariance estimation problem. Particularly, let {x_j}_{j=1}^N be a collection of n-variate random vectors, i.e., x_j ∈ R^n, drawn from a joint probability distribution with covariance matrix Σ. In this context, there may exist unknown marginal independences among the variables that we wish to discover; we note that (Σ)_{kl} = 0 when the k-th and l-th variables are independent. Here, we assume Σ is unknown and sparse, i.e., only a small number of its entries are nonzero. Our goal is to recover the nonzero pattern of Σ, as well as to compute a good approximation of it, from a (possibly) limited sample corpus. Mathematically, one way to approximate Σ is by solving:
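The page truncates before the formulation appears. A plausible instantiation, following the cited Rothman (2012) estimator and the self-concordant f(·) plus h(·) plus g(·) template of the abstract (the exact weighting is an assumption, not recovered from the truncated page):

\hat{\Sigma} \in \arg\min_{\Sigma \succ 0} \left\{ -\tau \log\det(\Sigma) + \tfrac{1}{2}\|\Sigma - \hat{\Sigma}_N\|_F^2 + \rho \|\Sigma\|_1 \right\},

where \hat{\Sigma}_N is the sample covariance of {x_j}_{j=1}^N, the log-det term is the self-concordant barrier that keeps the estimate positive definite, ρ > 0 tunes sparsity, and τ > 0 weights the barrier.

As a concrete baseline, the Python sketch below minimizes this composite objective with a proximal-gradient loop. It is a simple first-order stand-in for the paper's inexact proximal Newton scheme, assuming the ℓ1 penalty acts on off-diagonal entries as in Rothman (2012); the step size is backtracked so every iterate stays inside the positive-definite cone, where log det is finite.

import numpy as np

def prox_l1_offdiag(X, t):
    # Prox of t * sum_{k != l} |X_kl|: soft-threshold off-diagonal entries only;
    # the diagonal is unpenalized, so its prox is the identity.
    Y = np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    np.fill_diagonal(Y, np.diag(X))
    return Y

def sparse_cov_estimate(S_hat, rho=0.1, tau=0.5, max_iter=500, tol=1e-7):
    # Proximal-gradient sketch for
    #   min_{Sigma > 0}  -tau*logdet(Sigma) + 0.5*||Sigma - S_hat||_F^2 + rho*||Sigma||_{1,off}
    n = S_hat.shape[0]
    Sigma = np.eye(n)                                       # strictly positive-definite start
    for _ in range(max_iter):
        grad = (Sigma - S_hat) - tau * np.linalg.inv(Sigma)  # gradient of the smooth part
        alpha = 1.0
        while True:                                          # backtrack to stay in the PD cone
            cand = prox_l1_offdiag(Sigma - alpha * grad, alpha * rho)
            if np.linalg.eigvalsh(cand).min() > 1e-12:
                break
            alpha *= 0.5
        done = np.linalg.norm(cand - Sigma, 'fro') <= tol * max(1.0, np.linalg.norm(Sigma, 'fro'))
        Sigma = cand
        if done:
            break
    return Sigma

# Toy usage: recover the support of a sparse 20x20 covariance from 200 samples.
rng = np.random.default_rng(0)
n = 20
Sigma_true = np.eye(n)
Sigma_true[0, 1] = Sigma_true[1, 0] = 0.4
X = rng.multivariate_normal(np.zeros(n), Sigma_true, size=200)
Sigma_est = sparse_cov_estimate(np.cov(X, rowvar=False))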


Similar resources

A Well-Conditioned and Sparse Estimation of Covariance and Inverse Covariance Matrices Using a Joint Penalty

We develop a method for estimating well-conditioned and sparse covariance and inverse covariance matrices from a sample of vectors drawn from a sub-Gaussian distribution in a high-dimensional setting. The proposed estimators are obtained by minimizing a quadratic loss function together with a joint penalty of the ℓ1 norm and the variance of the eigenvalues. In contrast to some of the existing methods of covariance...

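A rough sketch of the stated JPEN objective, as I read the truncated snippet; the weights lam and gamma and whether the ℓ1 term covers the diagonal are assumptions, not taken from the paper:

import numpy as np

def jpen_objective(Sigma, S_hat, lam, gamma):
    # Quadratic (Frobenius) loss to the sample covariance S_hat
    quad = 0.5 * np.linalg.norm(Sigma - S_hat, 'fro') ** 2
    # Joint penalty: l1 norm (sparsity) + variance of eigenvalues (conditioning)
    l1 = lam * np.abs(Sigma).sum()
    eig_var = gamma * np.var(np.linalg.eigvalsh(Sigma))
    return quad + l1 + eig_var

Penalizing the spread of the eigenvalues pulls them toward their mean, which is what makes the estimator well-conditioned.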


Innovated Scalable Efficient Estimation in Ultra-Large Gaussian Graphical Models

Large-scale precision matrix estimation is of fundamental importance yet challenging in many contemporary applications for recovering Gaussian graphical models. In this paper, we suggest a new approach of innovated scalable efficient estimation (ISEE) for estimating large precision matrices. Motivated by the innovated transformation, we convert the original problem into that of large covariance m...


ℓ0 Sparse Inverse Covariance Estimation

Recently, there has been a focus on penalized log-likelihood covariance estimation for sparse inverse covariance (precision) matrices. The penalty is responsible for inducing sparsity, and a very common choice is the convex ℓ1 norm. However, the best estimator performance is not always achieved with this penalty. The most natural sparsity-promoting "norm" is the non-convex ℓ0 penalty, but its lack ...

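To make the ℓ1-versus-ℓ0 contrast concrete, here is a minimal sketch (mine, not from the cited paper) of the two proximal maps involved: the convex ℓ1 norm gives entrywise soft-thresholding, which shrinks the surviving entries and so biases large values, while the non-convex ℓ0 penalty gives hard-thresholding, which leaves survivors untouched:

import numpy as np

def soft_threshold(x, t):
    # prox of t * ||x||_1: shrink every entry toward zero by t
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def hard_threshold(x, t):
    # prox of (t**2 / 2) * ||x||_0: keep entries with |x_i| > t, zero the rest
    return np.where(np.abs(x) > t, x, 0.0)

x = np.array([-1.5, -0.3, 0.0, 0.4, 2.0])
print(soft_threshold(x, 0.5))  # [-1.  -0.   0.   0.   1.5]  survivors shrunk (biased)
print(hard_threshold(x, 0.5))  # [-1.5  0.   0.   0.   2. ]  survivors kept (non-convex)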

Deblocking Joint Photographic Experts Group Compressed Images via Self-learning Sparse Representation

JPEG is one of the most widely used image compression methods, but it causes annoying blocking artifacts at low bit-rates. Sparse representation is an efficient technique that can solve many inverse problems in image processing applications, such as denoising and deblocking. In this paper, a post-processing method is proposed for reducing JPEG blocking effects via sparse representation. In this ...



Publication date: 2014